Vector Symbolic Architectures combine a high-dimensional vector space with a set of carefully designed operators in order to perform symbolic computations with large numerical vectors. Major goals are the exploitation of their representational power and the ability to deal with fuzziness and ambiguity. Over the past years, several VSA implementations have been proposed. The available implementations differ in the underlying vector space and the particular implementations of the VSA operators. This paper provides an overview of eleven available VSA implementations and discusses the commonalities and differences in their underlying vector spaces and operators. We create a taxonomy of available binding operations and show an important ramification for non self-inverse binding operations using an example from analogical reasoning. A main contribution is the experimental comparison of the available implementations in order to evaluate (1) the capacity of bundles, (2) the approximation quality of non-exact unbinding operations, (3) the influence of combining binding and bundling operations on query answering performance, and (4) the performance on two example applications: visual place recognition and language recognition. We expect this comparison and systematization to be relevant for development with VSAs and to support the selection of an appropriate VSA for a particular task. Implementations are available.
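The binding and bundling operators discussed above can be illustrated with a minimal sketch of one common VSA flavor, MAP-style bipolar hypervectors (our illustration, not any of the eleven compared implementations; names and the record example are our own). Binding here is element-wise multiplication, which is self-inverse, and bundling is an element-wise majority vote:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality of the hypervectors

def rand_vec():
    """Random bipolar hypervector."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """MAP-style binding: element-wise multiplication (self-inverse)."""
    return a * b

def bundle(*vs):
    """Bundling: element-wise majority vote (sign of the sum)."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Normalized dot-product similarity."""
    return float(a @ b) / D

# Encode the record {colour: red, shape: square} as a bundle of
# bound role-filler pairs.
colour, red, shape, square = (rand_vec() for _ in range(4))
record = bundle(bind(colour, red), bind(shape, square))

# Unbinding a role yields a noisy version of its filler; a clean-up
# comparison against the known fillers recovers the correct one.
noisy = bind(record, colour)  # multiplication is its own inverse
assert sim(noisy, red) > sim(noisy, square)
```

Because MAP binding is exactly self-inverse, unbinding is exact up to bundling noise; the non self-inverse binding operations discussed in the paper instead require a dedicated inverse (or approximate unbinding), which is where the taxonomy's distinction matters.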
Progress on many Natural Language Processing (NLP) tasks, such as text classification, is driven by objective, reproducible and scalable evaluation via publicly available benchmarks. However, these are not always representative of real-world scenarios where text classifiers are employed, such as sentiment analysis or misinformation detection. In this position paper, we put forward two points that aim to alleviate this problem. First, we propose to extend text classification benchmarks to evaluate the explainability of text classifiers. We review challenges associated with objectively evaluating the capabilities to produce valid explanations which leads us to the second main point: We propose to ground these benchmarks in human-centred applications, for example by using social media, gamification or to learn explainability metrics from human judgements.
State-of-the-art deep-learning-based approaches to Natural Language Processing (NLP) are credited with various capabilities that involve reasoning with natural language texts. In this paper we carry out a large-scale empirical study investigating the detection of formally valid inferences in controlled fragments of natural language for which the satisfiability problem becomes increasingly complex. We find that, while transformer-based language models perform surprisingly well in these scenarios, a deeper analysis reveals that they appear to overfit to superficial patterns in the data rather than acquiring the logical principles governing the reasoning in these fragments.
Model-Based Reinforcement Learning (RL) is widely believed to have the potential to improve sample efficiency by allowing an agent to synthesize large amounts of imagined experience. Experience Replay (ER) can be considered a simple kind of model, which has proved extremely effective at improving the stability and efficiency of deep RL. In principle, a learned parametric model could improve on ER by generalizing from real experience to augment the dataset with additional plausible experience. However, owing to the many design choices involved in empirically successful algorithms, it can be very hard to establish where the benefits are actually coming from. Here, we provide theoretical and empirical insight into when, and how, we can expect data generated by a learned model to be useful. First, we provide a general theorem motivating how learning a model as an intermediate step can narrow down the set of possible value functions more than learning a value function directly from data using the Bellman equation. Second, we provide an illustrative example showing empirically how a similar effect occurs in a more concrete setting with neural network function approximation. Finally, we provide extensive experiments showing the benefit of model-based learning for online RL in environments with combinatorial complexity, but factored structure that allows a learned model to generalize. In these experiments, we take care to control for other factors in order to isolate, insofar as possible, the benefit of using experience generated by a learned model relative to ER alone.
Recently, attempts have been made to reduce annotation requirements in feature-based self-explanatory models for lung nodule diagnosis. As a representative, cRedAnno achieves competitive performance with considerably reduced annotation needs by introducing self-supervised contrastive learning to do unsupervised feature extraction. However, it exhibits unstable performance under scarce annotation conditions. To improve the accuracy and robustness of cRedAnno, we propose an annotation exploitation mechanism by conducting semi-supervised active learning with sparse seeding and training quenching in the learned semantically meaningful reasoning space to jointly utilise the extracted features, annotations, and unlabelled data. The proposed approach achieves comparable or even higher malignancy prediction accuracy with 10x fewer annotations, meanwhile showing better robustness and nodule attribute prediction accuracy under the condition of 1% annotations. Our complete code is open-source and available at: https://github.com/diku-dk/credanno.
Machine learning, and specifically deep-learning methods, have outperformed human capabilities in many pattern recognition and data processing problems and in game playing, and are now playing an increasingly important role in scientific discovery. A key application of machine learning in molecular science is to learn potential energy surfaces or force fields from ab-initio solutions of the electronic Schrödinger equation obtained with density functional theory, coupled cluster, or other quantum chemistry methods. Here we review a recent and complementary approach: using machine learning to aid the direct solution of quantum chemistry problems from first principles. Specifically, we focus on quantum Monte Carlo (QMC) methods that use neural-network ansatz functions to solve the electronic Schrödinger equation, both in first and second quantization, computing ground and excited states, and generalizing over multiple nuclear configurations. Compared to existing quantum chemistry methods, these new deep QMC methods have the potential to generate highly accurate solutions of the Schrödinger equation at relatively modest computational cost.
Chest computed tomography (CT) imaging adds valuable insight to the diagnosis and management of pulmonary infectious diseases, such as tuberculosis (TB). However, due to cost and resource limitations, often only X-ray images are available for initial diagnosis or follow-up comparison imaging during treatment. Due to their projective nature, X-ray images can be harder for clinicians to interpret. The lack of publicly available paired X-ray and CT image datasets makes it challenging to train a 3D reconstruction model. In addition, chest X-ray radiology may rely on different device modalities with varying image quality, and variation in the underlying population disease spectrum can create diversity in the inputs. We propose shape induction, that is, learning the shape of 3D CT from X-rays without CT supervision, as a novel technique to incorporate realistic X-ray distributions during the training of a reconstruction model. Our experiments demonstrate that this process improves both the perceptual quality of the generated CT and the accuracy of downstream classification of pulmonary infectious diseases.
Accurate geometry representation is essential in developing finite element models. Although deep-learning segmentation approaches are generally good even with little data, they have difficulties accurately segmenting fine features such as gaps and thin structures. Subsequently, the segmented geometries need labor-intensive manual modification to reach a quality usable for simulation purposes. We propose a strategy that uses transfer learning to reuse datasets with poor segmentation, combined with an interactive learning step in which fine-tuning on the data yields anatomically accurate segmentations suitable for simulations. We use a modified MultiPlanar UNet that is pre-trained using inferior hip joint segmentations, together with a dedicated loss function to learn the gap regions and post-processing to correct tiny inaccuracies on symmetric classes caused by rotational invariance. We demonstrate this robust yet conceptually simple approach with clinically validated results on computed tomography scans of hip joints. Code and resulting 3D models are available at: \url{https://github.com/miccai2022-155/autoseg}
This paper describes our use of Industry 4.0 Asset Administration Shells (AASs) for service robotics. We use AASs both with software components for service robots and with complete service robot systems. The AAS of a software component serves as a standardized digital data sheet. At design time, it helps system builders find and select software components that match the system-level requirements of the system to be built. The AAS of a system includes a data sheet for the system, performs data collection at run time, and allows skill-level commanding of the service robot. The AASs are generated and populated as part of our model-driven development and composition workflow for service robotics. AASs can act as a key enabler for standardized integration of, and interaction with, service robots.
Value Iteration (VI) is a foundational dynamic programming method, important for learning and planning in optimal control and reinforcement learning. VI proceeds in batches, where the update to the value of every state must be completed before the next batch of updates can begin. If the state space is large, completing a single batch is prohibitively expensive, rendering VI impractical for many applications. Asynchronous VI helps to address the large-state-space problem by updating one state at a time, in place and in an arbitrary order. However, asynchronous VI still requires a maximization over the entire action space, making it impractical for domains with large action spaces. To address this issue, we propose doubly-asynchronous value iteration (DAVI), a new algorithm that generalizes the idea of asynchrony from states to states and actions. More concretely, DAVI maximizes over a sampled subset of actions, which can be of any user-defined size. This simple approach of using sampling to reduce computation maintains theoretical properties similarly appealing to those of VI, without the need to wait for a full sweep over the entire action space in each update. In this paper, we show that DAVI converges to the optimal value function with probability one, converges at a near-geometric rate with probability 1-delta, and returns a near-optimal policy in a computation time that nearly matches a previously established bound for VI. We also empirically demonstrate DAVI's effectiveness in several experiments.
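The idea of asynchrony over both states and actions can be sketched on a random tabular MDP (a loose sketch of the general idea under our own assumptions, not the paper's exact algorithm or analysis; the subset size `k` and the max-with-previous-value update are our modeling choices): each iteration picks one state and maximizes only over a sampled subset of `k` actions, yet the values still approach those of full synchronous VI.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 50, 20, 0.9
k = 5  # size of the sampled action subset (user-defined)

# Random tabular MDP: P[s, a] is a distribution over next states,
# R[s, a] a reward in [0, 1].
P = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.random((S, A))

# Reference solution: full synchronous value iteration.
V_star = np.zeros(S)
for _ in range(500):
    V_star = (R + gamma * P @ V_star).max(axis=1)

# Doubly-asynchronous updates: one state at a time, maximizing over a
# sampled action subset. Since rewards are non-negative and V starts at 0,
# every backup stays below V*, so keeping the running maximum preserves
# monotone improvement toward V*.
V = np.zeros(S)
for _ in range(20_000):
    s = rng.integers(S)
    acts = rng.choice(A, size=k, replace=False)
    q = R[s, acts] + gamma * P[s, acts] @ V
    V[s] = max(V[s], q.max())

print(np.max(np.abs(V - V_star)))  # approximation error shrinks toward 0
```

Each update here costs O(kS) instead of the O(AS) of a full action-space maximization, which is the source of the computational savings when the action space is large.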